Time series shapelets are discriminative subsequences that have recently been found effective for time series clustering (TSC). Shapelets are convenient for interpreting the clusters. Thus, the main challenge for TSC is to discover high-quality variable-length shapelets to discriminate different clusters. In this paper, we propose a novel autoencoder-shapelet approach (AutoShape), which is the first study to take advantage of both autoencoders and shapelets for determining shapelets in an unsupervised manner. The autoencoder is specially designed to learn high-quality shapelets. More specifically, to guide the latent representation learning, we employ the latest self-supervised loss to learn unified embeddings for variable-length shapelet candidates (time series subsequences) of different variables, and propose a diversity loss to select discriminative embeddings in the unified space. We introduce a reconstruction loss to recover shapelets in the original time series space for clustering. Finally, we adopt the Davies-Bouldin index (DBI) to inform AutoShape of the clustering performance during learning. We present extensive experiments on AutoShape. To evaluate the clustering performance on univariate time series (UTS), we compare AutoShape with 15 representative methods using UCR archive datasets. To study the performance on multivariate time series (MTS), we evaluate AutoShape on 30 UEA archive datasets against 5 competitive methods. The results validate that AutoShape is the best among all the methods compared. We interpret the clusters with shapelets and obtain interesting intuitions about the clusters in three UTS case studies and one MTS case study.
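To make the DBI feedback step concrete, here is a minimal sketch (assuming scikit-learn; the `features` array standing in for shapelet-transformed series and the cluster count are hypothetical) of scoring a candidate clustering with the Davies-Bouldin index:

```python
# Minimal sketch: scoring a candidate clustering with the Davies-Bouldin
# index (DBI), the signal AutoShape uses to inform learning. The features
# here are random stand-ins for shapelet-transformed time series.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))  # hypothetical (n_samples, n_shapelets) matrix

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
dbi = davies_bouldin_score(features, labels)  # lower is better
print(f"DBI = {dbi:.3f}")
```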
Compared with traditional model-based fault detection and classification (FDC) methods, deep neural networks (DNNs) have proven effective for aerospace sensor FDC problems. However, the time consumed in training a DNN is excessive, and explainability analysis for the FDC neural network remains underwhelming. In recent years, a concept termed imagefication-based intelligent FDC has been studied. This concept advocates stacking the sensor measurement data into an image format; the sensor FDC problem is thereby transformed into an abnormal-region detection problem on the stacked image, which can readily borrow recent advances from the machine vision field. Although promising results have been claimed in imagefication-based intelligent FDC research, the low dimension of the stacked image has forced the use of small convolutional kernels and shallow DNN layers, which hinders FDC performance. In this paper, we first propose a data augmentation method that inflates the stacked image to a larger size (matching the input of the VGG16 net developed in the machine vision field). The FDC neural network is then trained by fine-tuning VGG16 directly. To truncate and compress the FDC net size (and hence its running time), we perform model pruning on the fine-tuned net. The class activation mapping (CAM) method is also adopted for explainability analysis of the FDC net to verify its internal operations. Via data augmentation, fine-tuning from VGG16, and model pruning, the FDC net developed in this paper achieves an FDC accuracy of 98.90% across 4 aircraft under 5 flight conditions (running time 26 ms). The CAM results also verify the FDC net with respect to its internal operations.
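As an illustration of the fine-tuning step, here is a minimal sketch assuming a recent torchvision; the number of fault classes and the dummy batch are placeholders, not the paper's actual setup:

```python
# Minimal sketch: fine-tuning VGG16 for imagefication-based sensor FDC.
# The number of fault classes (here 5) and the data are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

num_fault_classes = 5  # hypothetical
net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # downloads ImageNet weights
# Replace the final classifier layer for the FDC task.
net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_fault_classes)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of inflated sensor "images".
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_fault_classes, (8,))
loss = criterion(net(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```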
In this paper, a novel data-driven approach named augmented imagefication for fault detection (FD) of aircraft air data sensors (ADS) is proposed. Exemplified by the FD problem of aircraft air data sensors, an online FD scheme based on a deep neural network (DNN) running on an edge device is developed. First, the aircraft inertial reference unit measurements are adopted as equivalent inputs, making the scheme scalable to different aircraft/flight cases. Data associated with 6 different aircraft/flight conditions are collected to provide diversity (scalability) in the training/testing database. Augmented imagefication is then proposed for the DNN-based flight condition prediction. The raw data are reshaped as a grayscale image for convolutional operation, and the necessity of augmentation is analyzed and pointed out. Different kinds of augmentation methods, i.e., flip, repeat, tile, and their combinations, are discussed, and the results show that the repeat operation on both axes of the image matrix leads to the best DNN performance. The interpretability of the DNN is studied based on Grad-CAM, which provides better understanding and further consolidates the robustness of the DNN. Next, the DNN model trained on augmented imagefication data is optimized for mobile hardware deployment. After pruning the DNN, a lightweight model (98.79% smaller than the original VGG-16) with high accuracy (slightly up by 0.27%) and fast speed (time delay reduced by 87.54%) is obtained. Hyperparameter optimization of the DNN based on TPE is also implemented, and the best combination of hyperparameters is determined (learning rate 0.001, 600 iterative epochs, and batch size 100, yielding the highest accuracy of 0.987). Finally, an online FD deployment based on the edge device Jetson Nano is developed, and real-time monitoring of the aircraft is achieved. We believe this approach is instructive for addressing FD problems in other similar fields.
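To illustrate the repeat operation the results favor, a minimal NumPy sketch follows; the raw measurement shape and repeat factors are hypothetical examples:

```python
# Minimal sketch: "augmented imagefication" by repeating the stacked sensor
# matrix along both axes so it reaches a CNN-friendly input size. np.repeat
# duplicates each element (the "repeat" operation), unlike np.tile.
import numpy as np

raw = np.random.rand(28, 28)             # stacked sensor measurements as a grayscale image
reps = (8, 8)                            # repeat factors along rows and columns
augmented = np.repeat(np.repeat(raw, reps[0], axis=0), reps[1], axis=1)
print(raw.shape, "->", augmented.shape)  # (28, 28) -> (224, 224)
```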
Graph neural architecture search has received much attention as graph neural networks (GNNs) have recently been applied successfully to non-Euclidean data. However, exploring all possible GNN architectures in the huge search space is too time-consuming or even infeasible for big graph data. In this paper, we propose a parallel graph architecture search (GraphPAS) framework for graph neural networks. In GraphPAS, we explore the search space in parallel by designing a sharing-based evolutionary learning scheme, which improves search efficiency without losing accuracy. In addition, architecture information entropy is adopted dynamically for the mutation selection probability, which reduces space exploration. The experimental results show that GraphPAS outperforms state-of-the-art models in both efficiency and accuracy.
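To illustrate the entropy-guided mutation idea, here is a minimal sketch; the component vocabulary, the toy population, and the assumption that higher-entropy components receive higher mutation probability are ours, not necessarily GraphPAS's exact rule:

```python
# Minimal sketch: Shannon entropy over each architecture component in an
# evolutionary population, normalized into mutation-selection probabilities.
# One plausible reading (an assumption): more uncertain components get
# mutated more often, concentrating exploration where it matters.
import math
from collections import Counter

population = [            # hypothetical GNN architectures: [aggregator, activation, readout]
    ["gcn", "relu", "sum"],
    ["gat", "relu", "mean"],
    ["gcn", "tanh", "sum"],
    ["gat", "relu", "sum"],
]

def entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

entropies = [entropy(component) for component in zip(*population)]
total = sum(entropies) or 1.0
mutation_probs = [e / total for e in entropies]
print(mutation_probs)
```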
Optical flow, which expresses pixel displacement, is widely used in many computer vision tasks to provide pixel-level motion information. However, with the remarkable progress of convolutional neural networks, recent state-of-the-art approaches propose to solve problems directly at the feature level. Since the displacement of feature vectors is not consistent with pixel displacement, a common approach is to forward optical flow to a neural network and fine-tune this network on the task dataset. With this approach, one expects the fine-tuned network to produce tensors encoding feature-level motion information. In this paper, we rethink this de facto paradigm and analyze its drawbacks in the video object detection task. To mitigate these issues, we propose a novel network (IFF-Net) with an In-network Feature Flow estimation module (IFF module) for video object detection. Without resorting to pre-training on any additional dataset, our IFF module is able to directly produce feature flow, which indicates the feature displacement. Our IFF module consists of a shallow module that shares its features with the detection branch. This compact design enables IFF-Net to detect objects accurately while maintaining a fast inference speed. Furthermore, we propose a transformation residual loss (TRL) based on self-supervision, which further improves the performance of IFF-Net. IFF-Net outperforms existing methods and sets state-of-the-art performance on ImageNet VID.
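To illustrate what a feature-flow module enables, a minimal sketch of warping a reference feature map by a predicted per-pixel flow follows; the shallow predictor and all shapes are illustrative assumptions, not IFF-Net's exact design:

```python
# Minimal sketch: a shallow conv head predicts a per-pixel feature flow,
# which warps a reference feature map onto the current frame's features.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(feat, flow):
    """Warp feat (N,C,H,W) by flow (N,2,H,W) given in pixel offsets."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)

flow_head = nn.Conv2d(1024, 2, kernel_size=3, padding=1)  # hypothetical shallow predictor
cur_feat = torch.randn(1, 512, 32, 32)
ref_feat = torch.randn(1, 512, 32, 32)
flow = flow_head(torch.cat([cur_feat, ref_feat], dim=1))
aligned = warp(ref_feat, flow)
print(aligned.shape)  # torch.Size([1, 512, 32, 32])
```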
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are two-fold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
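A minimal sketch of the first insight, mask-pooled dynamic class centers used to re-weight query features, follows; the shapes and the sigmoid re-weighting are illustrative assumptions rather than RefT's exact design:

```python
# Minimal sketch: masked average pooling over the support object region
# yields a class center, which then re-weights query feature channels.
import torch

c, h, w = 256, 32, 32
support_feat = torch.randn(c, h, w)
support_mask = (torch.rand(h, w) > 0.5).float()  # binary support mask
query_feat = torch.randn(c, h, w)

# Masked average pooling -> dynamic class center (C,).
center = (support_feat * support_mask).sum(dim=(1, 2)) / support_mask.sum().clamp(min=1.0)

# Channel-wise re-weighting of the query features (illustrative choice).
weights = torch.sigmoid(center)
reweighted = query_feat * weights[:, None, None]
print(reweighted.shape)  # torch.Size([256, 32, 32])
```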
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychic stress reactions, micro-expressions (MEs), are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced by 671 participants and annotated by more than 20 annotators throughout three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER respectively on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
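As one common remedy for the class-imbalance problem mentioned above, a minimal sketch of inverse-frequency sampling with PyTorch follows; the label distribution is hypothetical, not DFME's actual statistics:

```python
# Minimal sketch: inverse-frequency sampling so rare micro-expression
# classes appear as often as common ones in training minibatches.
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.tensor([0] * 500 + [1] * 80 + [2] * 20)  # hypothetical imbalanced classes
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]             # rarer class -> higher weight
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
# Pass `sampler=sampler` to a DataLoader to draw class-balanced minibatches.
```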
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scene, low image resolution and noise interference are new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by combining the super-resolution network. (2) Using generated sample pairs to simulate quality variance distributions to help contrastive learning strategies obtain robust feature representation under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
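To illustrate the contrastive quality-invariance idea, a minimal InfoNCE-style sketch follows, pulling together embeddings of the same face at two image qualities; this is an illustrative stand-in, not the exact CQIL formulation:

```python
# Minimal sketch: a contrastive loss that treats the high-quality and
# degraded views of the same face as positives and all other pairs in the
# batch as negatives, encouraging quality-invariant features.
import torch
import torch.nn.functional as F

def quality_invariance_loss(z_hq, z_lq, temperature=0.1):
    """z_hq, z_lq: (N, D) embeddings of the same faces at two qualities."""
    z_hq = F.normalize(z_hq, dim=1)
    z_lq = F.normalize(z_lq, dim=1)
    logits = z_hq @ z_lq.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(z_hq.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = quality_invariance_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```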
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy. Then, we systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images, etc.), network architectures and training schemes. Through this study, we obtain two insights: 1) We find that the input representation plays a crucial role in robustness; specifically, under specific corruptions, different representations behave quite differently. 2) Although state-of-the-art methods on LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications. It is promising that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
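To illustrate the kind of corruptions such a benchmark applies, a minimal sketch of measurement noise and point dropout on a point cloud follows; the severity values are illustrative, not SemanticKITTI-C's actual settings:

```python
# Minimal sketch: two simple LiDAR corruptions -- Gaussian measurement noise
# and random point dropout -- applied to a clean point cloud for robustness
# evaluation.
import numpy as np

def corrupt_point_cloud(points, noise_std=0.03, drop_ratio=0.2, seed=0):
    """points: (N, 3) xyz coordinates. Returns a corrupted copy."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) > drop_ratio  # simulate missing returns
    noisy = points[keep] + rng.normal(0.0, noise_std, size=(keep.sum(), 3))
    return noisy

clean = np.random.rand(1000, 3) * 50.0
print(corrupt_point_cloud(clean).shape)
```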